2,215 research outputs found

    Nonlinear estimation for linear inverse problems with error in the operator

    We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performance depends on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. Comment: Published at http://dx.doi.org/10.1214/009053607000000721 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
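
    The abstract only names the two ingredients of the construction; the toy sketch below illustrates how a Galerkin projection of a noisy operator can be combined with coefficient thresholding. The diagonal operator, noise levels and threshold constant are hypothetical choices made for illustration, not the paper's specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # --- Hypothetical toy problem (not from the paper): a diagonal, mildly
    # ill-posed operator observed with noise in both the data and the operator.
    D = 32
    A = np.diag(1.0 / (1.0 + np.arange(D)))          # true operator
    f = np.zeros(D); f[:5] = rng.normal(size=5)      # sparse coefficient vector
    eps, delta = 0.01, 0.005                         # data / operator noise levels
    y_obs = A @ f + eps * rng.normal(size=D)         # noisy data
    A_obs = A + delta * rng.normal(size=(D, D))      # noisy operator

    # Step 1 (Galerkin regularization): project onto the first m basis directions
    # and invert the projected noisy operator there only.
    m = 8
    f_galerkin = np.linalg.solve(A_obs[:m, :m], y_obs[:m])

    # Step 2 (thresholding): keep only coefficients above a noise-dependent level,
    # in the spirit of wavelet thresholding applied to the Galerkin solution.
    thr = 3.0 * (eps + delta) * np.sqrt(np.log(D))
    f_hat = np.zeros(D)
    f_hat[:m] = np.where(np.abs(f_galerkin) > thr, f_galerkin, 0.0)

    print("retained coefficients:", np.flatnonzero(f_hat))
    ```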

    On adaptive inference and confidence bands

    The problem of existence of adaptive confidence bands for an unknown density f that belongs to a nested scale of Hölder classes over \mathbb{R} or [0,1] is considered. Whereas honest adaptive inference in this problem is impossible already for a pair of Hölder balls \Sigma(r), \Sigma(s), r\ne s, of fixed radius, a nonparametric distinguishability condition is introduced under which adaptive confidence bands can be shown to exist. It is further shown that this condition is necessary and sufficient for the existence of honest asymptotic confidence bands, and that it is strictly weaker than similar analytic conditions recently employed in Giné and Nickl [Ann. Statist. 38 (2010) 1122--1170]. The exceptional sets for which honest inference is not possible have vanishingly small probability under natural priors on Hölder balls \Sigma(s). If no upper bound for the radius of the Hölder balls is known, a price for adaptation has to be paid, and near-optimal adaptation is possible for standard procedures. The implications of these findings for a general theory of adaptive inference are discussed. Comment: Published at http://dx.doi.org/10.1214/11-AOS903 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    Nonparametric estimation of scalar diffusions based on low frequency data

    We study the problem of estimating the coefficients of a diffusion (X_t, t\geq 0); the estimation is based on discrete data X_{n\Delta}, n=0,1,...,N. The sampling frequency \Delta^{-1} is constant, and asymptotics are taken as the number N of observations tends to infinity. We prove that the problem of estimating both the diffusion coefficient (the volatility) and the drift in a nonparametric setting is ill-posed: the minimax rates of convergence for Sobolev constraints and squared-error loss coincide with those of a first- and a second-order linear inverse problem, respectively. To ensure ergodicity and limit technical difficulties we restrict ourselves to scalar diffusions living on a compact interval with reflecting boundary conditions. Our approach is based on the spectral analysis of the associated Markov semigroup. A rate-optimal estimation of the coefficients is obtained via the nonparametric estimation of an eigenvalue-eigenfunction pair of the transition operator of the discrete-time Markov chain (X_{n\Delta}, n=0,1,...,N) in a suitable Sobolev norm, together with an estimation of its invariant density. Comment: Published at http://dx.doi.org/10.1214/009053604000000797 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
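
    As a rough illustration of the spectral approach described above, the sketch below bins a low-frequency sample on a grid, estimates the transition matrix and invariant density of the chain (X_{n\Delta}), and extracts its second eigenpair. The data-generating process, the grid size, and the omitted final step of turning the eigenpair into drift and volatility estimates are stand-ins, not the paper's estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # --- Hypothetical low-frequency sample of a process on [0, 1] (a crude
    # reflected autoregressive stand-in, used only to have data to bin).
    N, Delta = 5000, 0.5
    x = np.empty(N + 1); x[0] = 0.5
    for n in range(N):
        prop = x[n] + 0.3 * (0.5 - x[n]) * Delta + 0.2 * np.sqrt(Delta) * rng.normal()
        x[n + 1] = np.clip(np.abs(prop), 0.0, 1.0)   # crude reflection at the boundary

    # Step 1: estimate the transition operator of the chain (X_{n*Delta}) by binning
    # consecutive pairs on a grid, and the invariant density from the marginal counts.
    K = 20
    bins = np.linspace(0.0, 1.0, K + 1)
    idx = np.clip(np.digitize(x, bins) - 1, 0, K - 1)
    counts = np.zeros((K, K))
    for a, b in zip(idx[:-1], idx[1:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    P_hat = counts / np.maximum(row, 1)              # estimated transition matrix
    mu_hat = row.ravel() / row.sum()                 # estimated invariant density (binned)

    # Step 2: extract the second-largest eigenvalue/eigenvector pair of P_hat; the
    # paper recovers drift and volatility from such an eigenpair via spectral
    # identities for the semigroup (not reproduced here).
    eigvals, eigvecs = np.linalg.eig(P_hat)
    order = np.argsort(-eigvals.real)
    kappa1, u1 = eigvals.real[order[1]], eigvecs[:, order[1]].real
    nu1 = -np.log(kappa1) / Delta                    # corresponding generator eigenvalue
    print("second transition eigenvalue:", kappa1, "-> generator eigenvalue:", nu1)
    ```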

    Early stopping for statistical inverse problems via truncated SVD estimation

    We consider truncated SVD (or spectral cut-off, projection) estimators for a prototypical statistical inverse problem in dimension D. Since calculating the singular value decomposition (SVD) only for the largest singular values is much less costly than the full SVD, our aim is to select a data-driven truncation level \widehat m \in \{1,\ldots,D\} based only on the knowledge of the first \widehat m singular values and vectors. We analyse in detail whether sequential early stopping rules of this type can preserve statistical optimality. Information-constrained lower bounds and matching upper bounds for a residual-based stopping rule are provided, which give a clear picture of the situations in which optimal sequential adaptation is feasible. Finally, a hybrid two-step approach is proposed which allows for classical oracle inequalities while considerably reducing numerical complexity. Comment: slightly modified version. arXiv admin note: text overlap with arXiv:1606.0770
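
    The residual-based sequential rule lends itself to a short numerical sketch: add one singular triple at a time and stop as soon as the squared residual drops below a critical value. The operator, the noise level, and the choice kappa = D * sigma^2 below are toy assumptions used only to illustrate the mechanics, not the paper's calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # --- Hypothetical discretised inverse problem Y = A f + noise in dimension D.
    D = 200
    U, _ = np.linalg.qr(rng.normal(size=(D, D)))
    V, _ = np.linalg.qr(rng.normal(size=(D, D)))
    s = (1.0 + np.arange(D)) ** -1.0                 # polynomially decaying singular values
    A = U @ np.diag(s) @ V.T
    f = V[:, :10] @ rng.normal(size=10)              # truth supported on the first components
    sigma = 0.01
    Y = A @ f + sigma * rng.normal(size=D)

    # Sequential early stopping: enlarge the truncation level m one singular triple
    # at a time and stop once the residual ||Y - A f_m||^2 falls below kappa
    # (a discrepancy-type rule; the constant is a toy choice).
    kappa = D * sigma ** 2
    f_m = np.zeros(D)
    residual = Y.copy()
    for m in range(1, D + 1):
        u_m, s_m, v_m = U[:, m - 1], s[m - 1], V[:, m - 1]   # m-th singular triple
        f_m = f_m + (u_m @ Y) / s_m * v_m                    # add one SVD component
        residual = residual - (u_m @ Y) * u_m                # residual after projection
        if residual @ residual <= kappa:
            break

    print("stopped at m =", m, "with estimation error", np.linalg.norm(f_m - f))
    ```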

    On adaptive posterior concentration rates

    We investigate the problem of deriving posterior concentration rates under different loss functions in nonparametric Bayes. We first provide a lower bound on posterior coverages of shrinking neighbourhoods that relates the metric or loss under which the shrinking neighbourhood is considered and an intrinsic pre-metric linked to frequentist separation rates. In the Gaussian white noise model, we construct feasible priors based on a spike-and-slab procedure reminiscent of wavelet thresholding that achieve adaptive rates of contraction under the L^2 or L^\infty metrics when the underlying parameter belongs to a collection of Hölder balls, and that moreover achieve our lower bound. We analyse the consequences in terms of the asymptotic behaviour of posterior credible balls as well as frequentist minimax adaptive estimation. We complement our results with an upper bound for the contraction rate under an arbitrary loss in a generic regular experiment. The upper bound is attained for certain sieve priors and enables us to extend our results to density estimation. Comment: Published at http://dx.doi.org/10.1214/15-AOS1341 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
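
    A minimal sketch of the coefficient-wise spike-and-slab computation in a Gaussian sequence (white noise) model is given below. The mixing weight w and slab variance tau^2 are fixed toy values, whereas the priors in the paper are calibrated to achieve adaptation, so this illustrates the thresholding-like mechanism rather than the paper's procedure.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # --- Hypothetical sequence-model data: Y_k = theta_k + noise, with a sparse
    # truth standing in for wavelet coefficients of a Hölder function.
    n = 1000
    s = 1.0 / np.sqrt(n)                      # noise level of each coefficient
    theta = np.zeros(64); theta[:6] = [1.0, -0.7, 0.5, 0.4, -0.3, 0.2]
    Y = theta + s * rng.normal(size=theta.size)

    # Coefficient-wise spike-and-slab posterior for the prior (1-w)*delta_0 + w*N(0, tau^2).
    w, tau = 0.1, 1.0
    marg_slab = norm.pdf(Y, scale=np.sqrt(s**2 + tau**2))   # marginal density under the slab
    marg_spike = norm.pdf(Y, scale=s)                       # marginal density under the spike
    post_w = w * marg_slab / (w * marg_slab + (1 - w) * marg_spike)

    # Posterior mean: Gaussian shrinkage of Y inside the slab, weighted by the
    # posterior slab probability; small coefficients are driven to (nearly) zero.
    shrink = tau**2 / (tau**2 + s**2)
    post_mean = post_w * shrink * Y

    print(np.round(post_w[:8], 3))
    print(np.round(post_mean[:8], 3))
    ```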